
    The Quest for Citations: Drivers of Article Impact

    Why do some articles become building blocks for future scholars, while many others remain unnoticed? We aim to answer this question by contrasting, synthesizing and simultaneously testing three scientometric perspectives – universalism, social constructivism and presentation – on the influence of article and author characteristics on article citations. To do so, we study all articles published from 1990 to 2002 in a sample of five major marketing journals that are central to the discipline. We count the number of citations each of these articles has received and regress this count on an extensive set of characteristics of the article (i.e. article quality, article domain, title length, the use of attention grabbers and expositional clarity) and of the author (i.e. author visibility and author personal promotion). We find that the number of citations an article in the marketing discipline receives depends on "what one says" (quality and domain) and on "who says it" (author visibility and personal promotion), and not so much on "how one says it" (title length, the use of attention grabbers, and expositional clarity). Our insights contribute to the marketing literature and are relevant to scientific stakeholders, such as the management of scientific journals and individual academic scholars, as they strive to maximize citations. They are also relevant to marketing practitioners: they inform practitioners about the characteristics of academic marketing journals and their relevance to the decisions practitioners face. On the other hand, they also raise challenges to making our journals accessible and relevant to marketing practitioners: (1) authors visible to academics are not necessarily visible to practitioners; (2) the readability of an article may hurt academic credibility and impact, while it may be instrumental in influencing practitioners; (3) it remains questionable whether articles that academics assess to be of high quality are also managerially relevant.
    Keywords: Impact; Citation Analysis; Referencing; Scientometrics; Cite
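
    A minimal sketch of the kind of count regression the abstract describes, assuming the article data sit in a CSV with hypothetical column names; the abstract does not state the exact estimator, so a negative binomial model is used here purely for illustration.

```python
# Hedged sketch: regress citation counts on article and author characteristics.
# File name and column names are assumptions, not from the paper.
import pandas as pd
import statsmodels.formula.api as smf

articles = pd.read_csv("marketing_articles_1990_2002.csv")  # hypothetical data file

# Negative binomial count regression (illustrative choice of estimator).
model = smf.negativebinomial(
    "citations ~ quality + C(domain) + title_length + attention_grabbers"
    " + expositional_clarity + author_visibility + personal_promotion",
    data=articles,
).fit()
print(model.summary())
```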

    Dynamic Congestion and Urban Equilibrium


    Collaborative Infrastructures for Mobilizing Intellectual Resources: assessing intellectual bandwidth in a knowledge intensive organization

    The use of intellectual assets of key professionals to provide customized goods and services is seen as a key characteristic of knowledge intensive organizations. While knowledge management efforts have become popular in organizations that depend on the knowledge and skills of their employees, it is unclear what the benefits of such efforts are and how these intellectual resources may actually create value for the organization. At the same time, vast information and communication technology infrastructures are being implemented to tap into these diverse intellectual resources, often to little effect. This paper uses the Intellectual Bandwidth Model originally developed by Nunamaker et al. (2001) to investigate the extent to which collaborative technologies support the mobilization of intellectual resources to create value for an organization. Following an investigation of the intellectual bandwidth of a large multinational consulting company, this paper provides insight into the role of technology in mobilizing intellectual resources and offers implications for developing infrastructure to support core business processes.

    Technique for validating remote sensing products of water quality

    Remote sensing of water quality was initiated as an additional part of the ongoing activities of the EAGLE2006 project. Within this context, intensive in-situ and airborne measurement campaigns were carried out over the Wolderwijd and Veluwemeer natural waters. However, in-situ measurements and image acquisitions were not simultaneous. This poses some constraints on validating air/space-borne remote sensing products of water quality. Nevertheless, the detailed in-situ measurements and hydro-optical model simulations provide a benchmark for validating remote sensing products. This is realized by developing a stochastic technique to quantify the uncertainties on the retrieved aquatic inherent optical properties (IOP). The output of the proposed technique is applied to validate remote sensing products of water quality. In this processing phase, simulations of radiative transfer in the coupled atmosphere-water system are performed to generate at-sensor-level spectra. The upper and lower boundaries of the perturbations around each recorded spectrum are then modelled as a function of the residuals between simulated and measured spectra. The perturbations are parameterized as a function of model approximations/inversion, sensor noise and the residual atmospheric signal. All error sources are treated as being of a stochastic nature. Three scenarios are considered: spectrally correlated (i.e. wavelength-dependent) perturbations, spectrally uncorrelated perturbations, and a mixed scenario of the previous two with equal probability of occurrence. Uncertainties on the retrieved IOP are quantified, along with the relative contribution of each perturbation component to the total error budget of the IOP. This technique can be used to validate earth observation products of water quality in remote areas where few or no in-situ measurements are available.
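
    A minimal sketch of the perturbation scheme described above, assuming measured and model-simulated at-sensor spectra on a common wavelength grid; the function and variable names, the uniform bounding of the perturbations, and the draw count are assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: generate perturbed spectra within an envelope set by the
# model-measurement residuals, under the three scenarios named in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def perturb_spectra(measured, simulated, n_draws=1000, scenario="mixed"):
    """Return an array of perturbed spectra (n_draws x n_bands)."""
    residual = simulated - measured          # wavelength-dependent residual envelope
    n_bands = measured.size
    draws = np.empty((n_draws, n_bands))
    for i in range(n_draws):
        mode = scenario
        if scenario == "mixed":              # equal probability of the two cases
            mode = rng.choice(["correlated", "uncorrelated"])
        if mode == "correlated":
            # one random factor scales the whole envelope (spectrally correlated)
            draws[i] = measured + rng.uniform(-1, 1) * residual
        else:
            # independent factor per wavelength (spectrally uncorrelated)
            draws[i] = measured + rng.uniform(-1, 1, n_bands) * residual
    return draws
```

    The perturbed spectra would then be pushed through the IOP retrieval to quantify the spread of the retrieved properties under each scenario.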

    Predicting travel time variability for cost-benefit analysis

    Unreliable travel times cause substantial costs to travelers. Nevertheless, they are not taken into account in many cost-benefit analyses (CBA), or only in very rough ways. This paper aims to provide simple rules for predicting variability, based on travel time data from Dutch highways. The paper uses two different concepts of travel time variability, which differ in their assumptions about the information available to drivers. The first measure is based on the assumption that, for a given road link and given time of the day, the expected travel time is constant across all working days (rough information: RI). In the second case, expected travel times are assumed to reflect day-specific factors such as weather conditions or weekdays (fine information: FI). For both definitions of variability, we find that the mean travel time is a good predictor of variability. On average, longer delays are associated with higher variability. However, the derivative of travel time variability with respect to delays is decreasing in delays. It can be shown that this result relates to differences in the relative shares of the observed traffic 'regimes' (free-flow, congested, hyper-congested) in the mean delay. For most CBAs, no information on the relative shares of the traffic regimes is available. A non-linear model based on mean travel times can be used as an approximation.
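
    A minimal sketch of the two variability concepts (RI and FI), assuming link-level travel-time records in a CSV with hypothetical column names; the day-specific conditioning used here (weekday only) is a stand-in for the richer factors, such as weather, mentioned in the abstract.

```python
# Hedged sketch: compute RI and FI variability measures from travel-time records.
# File name and column names ('link', 'date', 'time_of_day', 'travel_time') are assumptions.
import pandas as pd

tt = pd.read_csv("highway_travel_times.csv")

# Rough information (RI): expected travel time is constant across working days
# for a given link and time of day, so variability is the spread across days.
ri = tt.groupby(["link", "time_of_day"])["travel_time"].std().rename("ri_std")

# Fine information (FI): expected travel times reflect day-specific factors,
# approximated here by additionally conditioning on the weekday.
tt["weekday"] = pd.to_datetime(tt["date"]).dt.day_name()
fi = (
    tt.groupby(["link", "time_of_day", "weekday"])["travel_time"]
    .std()
    .groupby(["link", "time_of_day"])
    .mean()
    .rename("fi_std")
)

# Mean travel time per link and time of day, used as the predictor of variability.
mean_tt = tt.groupby(["link", "time_of_day"])["travel_time"].mean().rename("mean_tt")
summary = pd.concat([mean_tt, ri, fi], axis=1)
print(summary.head())
```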

    Pricing, capacity and long-run cost functions for first-best and second-best network problems

    This paper considers the use of 'long-run cost functions' for congested networks in solving second-best network problems, in which capacity and tolls are instruments. We derive analytical results both for general cost and demand functions and for specific functional forms, namely Bureau of Public Roads cost functions and constant-elasticity demand functions. The latter are also used in a numerical simulation model. We consider second-best cases where only a subset of links in a network is subject to tolling and/or capacity choice, and cases with and without a self-financing constraint imposed. We demonstrate that, under certain assumptions, second-best long-run cost (or, more precisely, generalized price) functions can be derived for most of the cases of interest, and that these can be used in an applied network model as a substitute for the conventional short-run user cost functions. Doing so reduces the dimensionality of the problem and should therefore help speed up procedures for finding second-best optima. © 2009 Elsevier Ltd
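
    A small numerical sketch of the building blocks named above: a Bureau of Public Roads (BPR) link cost function and a constant-elasticity demand function, solved for a single-link untolled equilibrium. Parameter values are illustrative assumptions, not those of the paper's simulation model.

```python
# Hedged sketch: equate BPR travel cost and constant-elasticity willingness to pay
# on a single link to find the user-equilibrium flow. All parameters are illustrative.
from scipy.optimize import brentq

def bpr_cost(flow, free_flow_time=10.0, capacity=1000.0, alpha=0.15, beta=4.0):
    """BPR travel time: t = t0 * (1 + alpha * (q / K)^beta)."""
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

def inverse_demand(flow, scale=2000.0, elasticity=-0.3):
    """Constant-elasticity demand q = scale * p^elasticity, inverted to p(q)."""
    return (flow / scale) ** (1.0 / elasticity)

# Untolled user equilibrium: generalized price equals willingness to pay.
q_eq = brentq(lambda q: bpr_cost(q) - inverse_demand(q), 1e-6, 1e5)
print(f"equilibrium flow ~ {q_eq:.1f}, generalized price ~ {bpr_cost(q_eq):.2f}")
```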